A Decomposition of Markov Processes via Group Actions

Authors

Abstract


Similar Articles

Accelerated decomposition techniques for large discounted Markov decision processes

Many hierarchical techniques for solving large Markov decision processes (MDPs) are based on partitioning the state space into strongly connected components (SCCs) that can be organized into levels. In each level, smaller problems called restricted MDPs are solved, and these partial solutions are then combined to obtain the global solution. In this paper, we first propose a novel algorith...
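The decomposition step described above starts from the graph of states reachable under any action. A minimal sketch of that step, using Tarjan's algorithm on a plain adjacency-list representation (the graph encoding and function names here are illustrative, not the paper's):

```python
def strongly_connected_components(graph):
    """Partition the states of a transition graph into SCCs (Tarjan's algorithm).

    graph: dict mapping each state to the list of states reachable from it
           under at least one action.
    Returns a list of SCCs (sets of states), in reverse topological order,
    which is the level ordering a hierarchical solver would process.
    """
    counter = [0]
    stack, index, lowlink, on_stack = [], {}, {}, set()
    sccs = []

    def visit(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                visit(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:  # v is the root of an SCC
            component = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                component.add(w)
                if w == v:
                    break
            sccs.append(component)

    for v in graph:
        if v not in index:
            visit(v)
    return sccs

# Two SCCs: {0, 1} and the absorbing pair {2, 3}.
components = strongly_connected_components({0: [1], 1: [0, 2], 2: [3], 3: [2]})
```

Each resulting component would then define one restricted MDP; solving them in reverse topological order lets each level reuse the values already computed for the levels it can reach.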


Nonabelian free group actions: Markov processes, the Abramov-Rohlin formula and Yuzvinskii’s formula

This paper introduces Markov chains and processes over nonabelian free groups and semigroups. We prove a formula for the f-invariant of a Markov chain over a free group in terms of transition matrices that parallels the classical formula for the entropy of a Markov chain. Applications include free group analogues of the Abramov-Rohlin formula for skew-product actions and Yuzvinskii’s addition form...


Exact Decomposition Approaches for Markov Decision Processes: A Survey

As classical methods are intractable for Markov decision processes (MDPs) requiring a large state space, decomposition and aggregation techniques are very useful for coping with large problems. These techniques are in general a special case of the classic divide-and-conquer framework: splitting a large, unwieldy problem into smaller components and solving the parts in order to construct the gl...


Comparison of Markov processes via infinitesimal generators

We derive comparison results for Markov processes with respect to stochastic orderings induced by function classes. Our main result states that stochastic monotonicity of one process and comparability of the infinitesimal generators implies ordering of the processes. Unlike in previous work no boundedness assumptions on the function classes are needed anymore. We also present an integral versio...
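The shape of such a comparison theorem can be sketched as follows (the notation here is generic, not taken from the paper): for processes $X^1, X^2$ with infinitesimal generators $A_1, A_2$ and a function class $\mathcal{F}$ inducing the ordering,

```latex
A_1 f \le A_2 f \quad \text{for all } f \in \mathcal{F},
```

together with stochastic monotonicity of one of the two processes and ordered initial distributions, yields

```latex
\mathbb{E}\, f\bigl(X^1_t\bigr) \;\le\; \mathbb{E}\, f\bigl(X^2_t\bigr)
\quad \text{for all } f \in \mathcal{F},\ t \ge 0.
```

The monotonicity assumption is what lets the pointwise generator comparison propagate along the semigroup to a comparison of the laws at every time.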


Hierarchical Solution of Markov Decision Processes using Macro-actions

We investigate the use of temporally abstract actions, or macro-actions, in the solution of Markov decision processes. Unlike current models that combine both primitive actions and macro-actions and leave the state space unchanged, we propose a hierarchical model (using an abstract MDP) that works with macro-actions only, and that significantly reduces the size of the state space. This is achie...
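Once an abstract MDP over macro-actions is built, it can be solved with standard value iteration on the reduced state space. A minimal sketch under simplified assumptions (discounting by `gamma ** D` uses an expected duration in place of the exact expected discount $\mathbb{E}[\gamma^\tau]$; all names and the model encoding are illustrative, not the paper's):

```python
def macro_value_iteration(states, macros, R, P, D, gamma=0.95, tol=1e-8):
    """Value iteration on an abstract MDP whose only actions are macro-actions.

    states:  iterable of abstract states
    macros:  macros[s] = list of macro-actions available in abstract state s
    R[s][m]: expected discounted reward accumulated while executing m from s
    P[s][m]: dict mapping next abstract state -> probability
    D[s][m]: expected duration (steps) of m from s, used to discount the future
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                R[s][m]
                + gamma ** D[s][m] * sum(p * V[t] for t, p in P[s][m].items())
                for m in macros[s]
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best  # in-place (Gauss-Seidel style) update
        if delta < tol:
            return V

# Toy abstract MDP: from 'a' a single macro 'go' earns reward 1 and leads
# to the absorbing abstract state 'b', where 'stay' earns nothing.
V = macro_value_iteration(
    states=["a", "b"],
    macros={"a": ["go"], "b": ["stay"]},
    R={"a": {"go": 1.0}, "b": {"stay": 0.0}},
    P={"a": {"go": {"b": 1.0}}, "b": {"stay": {"b": 1.0}}},
    D={"a": {"go": 2}, "b": {"stay": 1}},
)
```

The point of the abstraction is that `states` here ranges over the (much smaller) abstract state space, so each sweep is far cheaper than a sweep over the primitive states.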



Journal

Journal title: Journal of Theoretical Probability

Year: 2008

ISSN: 0894-9840,1572-9230

DOI: 10.1007/s10959-008-0162-x